Safety-critical Autonomous Systems require a trustworthy and transparent decision-making process to be deployable in the real world. Advances in Machine Learning deliver high performance, but largely through black-box algorithms. We focus the discussion of explainability specifically on Autonomous Vehicles (AVs). As a safety-critical system, AVs present a unique opportunity to use cutting-edge Machine Learning techniques while requiring transparency in decision making. Interpretability of every action the AV takes becomes crucial in post-hoc analysis where blame assignment might be necessary. In this paper, we take a position on how researchers could incorporate explainability and interpretability into the design and optimization of separate Autonomous Vehicle modules, including Perception, Planning, and Control.
Robot localization is the inverse problem of finding a robot's pose using a map and sensor measurements. In recent years, Invertible Neural Networks (INNs) have successfully solved ambiguous inverse problems in various fields. This paper proposes a framework that solves the localization problem with an INN. We design an INN that provides an implicit map representation in the forward path and localization in the inverse path. By sampling the latent space during evaluation, Local_INN outputs robot poses with covariance, which can be used to estimate the uncertainty. We show that the localization performance of Local_INN is on par with current methods, with lower latency. We show detailed 2D and 3D map reconstructions from Local_INN using poses outside the training set. We also provide a global localization algorithm using Local_INN to tackle the kidnapping problem.
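To make the inverse-path idea concrete, here is a minimal sketch (not the authors' code) of sampling-based pose estimation with a conditional invertible network: latent draws are pushed backward through affine-coupling blocks conditioned on a scan embedding, and the resulting pose samples yield a mean and covariance. The block sizes, the padded 4-dimensional pose, and the scan-embedding width are illustrative assumptions.

```python
# Hedged sketch of Local_INN-style localization: the real model and its
# training procedure differ; this only shows the sample-to-covariance idea.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One conditional affine-coupling block: invertible in closed form."""
    def __init__(self, dim=4, cond_dim=32, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x, cond, inverse=False):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep scales well-behaved
        x2 = (x2 - t) * torch.exp(-s) if inverse else x2 * torch.exp(s) + t
        return torch.cat([x1, x2], dim=1)

@torch.no_grad()
def localize(blocks, scan_embed, n_samples=64):
    """Sample the latent space; return pose mean and covariance."""
    z = torch.randn(n_samples, 4)               # latent draws
    cond = scan_embed.expand(n_samples, -1)     # same scan for every draw
    for blk in reversed(blocks):                # run the INN in inverse
        z = blk(z, cond, inverse=True)
    poses = z[:, :3]                            # (x, y, theta); 4th dim is padding
    return poses.mean(0), torch.cov(poses.T)

blocks = nn.ModuleList([AffineCoupling() for _ in range(4)])
mean, cov = localize(blocks, torch.randn(1, 32))
print(mean.shape, cov.shape)                    # torch.Size([3]) torch.Size([3, 3])
```

A real flow would also permute dimensions between blocks so that both halves of the input get transformed; that is omitted here for brevity.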
Deep Reinforcement Learning (DRL) is a promising approach to learning robot control policies solely from demonstration and experience. To cover the whole dynamic behavior of the robot, DRL training is an active exploration process typically carried out in simulation environments. Although this simulation training is cheap and fast, applying DRL algorithms to real-world settings is difficult. If agents are trained until they perform safely in simulation, transferring them to physical systems is hard because of the sim-to-real gap caused by differences between the simulation dynamics and the physical robot. In this paper, we propose a method for online training of DRL agents to drive autonomously on a physical vehicle, using a model-based safety supervisor. Our solution uses a supervisory system that checks whether the action selected by the agent is safe or unsafe and ensures that a safe action is always executed on the vehicle. In this way, we bypass the sim-to-real problem while training DRL algorithms safely, quickly, and efficiently. We present a variety of real-world experiments in which a small physical vehicle is trained online to drive autonomously without prior simulation training. The evaluation results show that our method trains agents with improved sample efficiency while never crashing, and the trained agents demonstrate better driving performance than agents trained in simulation.
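As a minimal sketch of the supervisor loop (with an assumed kinematic bicycle model and a toy straight-corridor track, not the paper's actual dynamics model or safe set), the idea is: roll the agent's proposed action forward through the model, and substitute a braking fallback whenever the rollout leaves the safe region.

```python
# Hedged sketch of a model-based safety supervisor; wheelbase, horizon, and
# the corridor-shaped track boundary are illustrative assumptions.
import numpy as np

def is_safe(state, action, dt=0.1, horizon=10, track_half_width=1.0):
    """Roll a kinematic bicycle model forward; reject actions that leave the track."""
    x, y, yaw, v = state
    steer, accel = action
    for _ in range(horizon):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += v * np.tan(steer) / 0.33 * dt   # 0.33 m wheelbase (assumption)
        v = max(v + accel * dt, 0.0)
        if abs(y) > track_half_width:          # toy track: straight corridor
            return False
    return True

def supervised_step(agent_action, state, fallback=(0.0, -2.0)):
    """Execute the agent's action only if the model predicts it is safe."""
    if is_safe(state, agent_action):
        return agent_action, False             # (action, intervened?)
    return fallback, True                      # brake straight ahead instead

action, intervened = supervised_step((0.3, 1.0), np.array([0.0, 0.8, 0.0, 2.0]))
print(action, intervened)
```

The intervention flag can also be fed back to the learner, for example as a penalty, so the agent learns to avoid triggering the supervisor in the first place.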
Although robotics courses are well established in higher education, these courses often focus on theory and sometimes lack systematic coverage of the techniques for developing, deploying, and applying software to real hardware. Moreover, most hardware platforms used for robotics teaching are low-level toys aimed at younger, middle-school-level students. To address this gap, an autonomous vehicle hardware platform, called F1TENTH, was developed for teaching autonomous systems. This paper presents the teaching modules and software stack for various educational levels, together with racing-themed competitions that replace exams. F1TENTH provides a modular hardware platform and associated software for teaching the fundamentals of autonomous driving algorithms. From basic reactive methods to advanced planning algorithms, the teaching modules strengthen students' computational thinking through autonomous driving with the F1TENTH vehicle. F1TENTH fills the gap between research platforms and low-end toy cars and offers hands-on experience with the topics in autonomous systems. Four universities have adopted the teaching modules for their semester-long undergraduate and graduate courses over multiple years. Student feedback was used to analyze the effectiveness of the F1TENTH platform. More than 80% of students strongly agreed that the hardware platform and modules greatly motivated their learning, and more than 70% strongly agreed that the hardware enhanced their understanding of the subjects. The survey results show that more than 80% of students strongly agreed that the competitions motivated them for the course.
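For a flavor of the basic reactive methods the modules begin with, below is a hedged sketch of a "follow the gap" policy over a LiDAR scan; the field of view, bubble size, and clearance threshold are assumptions, not the course's reference solution.

```python
# Hedged sketch of a reactive follow-the-gap driver for a 1080-beam scan.
import numpy as np

def follow_the_gap(ranges, fov=np.radians(270), bubble=8, min_clear=1.5):
    """Steer toward the center of the widest clear gap in the scan."""
    r = np.asarray(ranges, dtype=float).copy()
    i_near = int(np.argmin(r))
    r[max(0, i_near - bubble): i_near + bubble + 1] = 0.0  # blank the closest obstacle
    clear = r > min_clear
    # find the longest run of clear beams
    best_len, best_start, run_start = 0, 0, None
    for i, c in enumerate(np.append(clear, False)):
        if c and run_start is None:
            run_start = i
        elif not c and run_start is not None:
            if i - run_start > best_len:
                best_len, best_start = i - run_start, run_start
            run_start = None
    mid = best_start + best_len // 2
    return (mid / (len(r) - 1) - 0.5) * fov    # steering angle in radians

scan = np.full(1080, 10.0); scan[500:540] = 0.4   # an obstacle near dead ahead
print(follow_the_gap(scan))                        # steers toward the wider side
```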
Autonomous racing rewards agents that react to opponents' behaviors and make progress along the track with agile maneuvers, while penalizing both overly aggressive and overly conservative agents. Understanding other agents' intent is critical to deploying autonomous systems in adversarial multi-agent environments. Current approaches either oversimplify the discretization of the agents' action space or fail to recognize the long-term effects of actions and become myopic. Our work focuses on addressing these two challenges. First, we propose a novel dimension-reduction method that encapsulates diverse agent behaviors while preserving the continuity of agent actions. Second, we formulate the two-agent racing game as a regret-minimization problem and provide a solution via tractable counterfactual regret minimization with a regret prediction model. Finally, we validate our findings experimentally on scaled autonomous vehicles. We demonstrate that using the proposed game-theoretic planner with agent characterization of the objective space significantly improves the win rate against different opponents, and that the improvement transfers to unseen opponents in unseen environments.
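For intuition on the regret-minimization formulation, here is a toy sketch of the regret-matching core that counterfactual regret minimization builds on, run on a hand-built matrix game; the paper instead pairs regret minimization with a learned regret-prediction model over a reduced continuous action space.

```python
# Hedged sketch: sampled regret matching in a two-player zero-sum matrix game.
import numpy as np

def regret_matching(payoff, iters=5000, rng=np.random.default_rng(0)):
    """Self-play regret matching; returns the players' average strategies."""
    n = payoff.shape[0]
    regrets = np.zeros((2, n))
    strategy_sum = np.zeros((2, n))
    for _ in range(iters):
        strats = []
        for p in range(2):
            pos = np.maximum(regrets[p], 0.0)   # play in proportion to positive regret
            s = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
            strategy_sum[p] += s
            strats.append(s)
        a = [rng.choice(n, p=s) for s in strats]
        for p in range(2):
            sign = 1.0 if p == 0 else -1.0      # zero-sum: column player negates
            util = sign * (payoff[:, a[1]] if p == 0 else payoff[a[0], :])
            regrets[p] += util - util[a[p]]     # regret for not playing each action
    # in zero-sum games the average strategy approaches a Nash equilibrium
    return strategy_sum / iters

rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)  # rock-paper-scissors
print(regret_matching(rps).round(2))            # ~uniform for both players
```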
Current Graph Neural Networks (GNNs) suffer from the over-smoothing problem, which leads to indistinguishable node representations and lower model performance as more GNN layers are added. Many methods have been proposed in recent years to tackle this problem. However, existing approaches to over-smoothing either emphasize model performance while neglecting the smoothness of node representations, or adopt one technique at a time, lacking an overall framework that jointly leverages multiple solutions to the over-smoothing challenge. To address these issues, we propose Grato, a framework based on neural architecture search that automatically searches for GNN architectures. Grato adopts a novel loss function to promote a balance between model performance and representation smoothness. Besides existing methods, our search space also includes DropAttribute, a new scheme for alleviating the over-smoothing challenge, so as to make full use of diverse solutions. We conduct extensive experiments on six real-world datasets to evaluate Grato, showing that Grato outperforms baselines on over-smoothing metrics and achieves competitive performance in accuracy. Grato is especially effective and robust as the number of GNN layers increases. Further experiments confirm the quality of the node representations learned by Grato and the effectiveness of the model architecture. We provide the code of Grato on GitHub (https://github.com/fxsxjtu/grato).
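The two named ingredients can be sketched as follows; this is a hedged illustration in which the mean-pairwise-distance smoothness term and the loss weighting alpha are assumptions, not Grato's exact formulation.

```python
# Hedged sketch: DropAttribute (zeroing whole feature columns, in contrast to
# DropEdge or per-element dropout) plus a performance/smoothness trade-off loss.
import torch
import torch.nn.functional as F

def drop_attribute(x, p=0.2, training=True):
    """Zero out each feature column with probability p."""
    if not training or p == 0.0:
        return x
    mask = (torch.rand(x.size(1), device=x.device) > p).float()
    return x * mask                             # broadcasts over nodes

def smoothness(h):
    """Mean pairwise cosine distance; values near 0 signal over-smoothing."""
    hn = F.normalize(h, dim=1)
    return (1.0 - hn @ hn.T).mean()

def grato_style_loss(logits, labels, h, alpha=0.1):
    # encourage accuracy AND keep node representations from collapsing together
    return F.cross_entropy(logits, labels) - alpha * smoothness(h)

h = torch.randn(100, 16)                        # node embeddings
logits, labels = torch.randn(100, 7), torch.randint(0, 7, (100,))
print(grato_style_loss(logits, labels, drop_attribute(h)))
```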
Twitter bot detection has become an increasingly important task to combat misinformation, facilitate social media moderation, and preserve the integrity of online discourse. State-of-the-art bot detection methods generally leverage the graph structure of the Twitter network, and they exhibit promising performance in the face of novel Twitter bots that traditional methods fail to detect. However, few of the existing Twitter bot detection datasets are graph-based, and even these graph-based datasets suffer from limited dataset scale, incomplete graph structure, and low annotation quality. In fact, the lack of a large-scale graph-based Twitter bot detection benchmark that addresses these issues has seriously hindered the development and evaluation of graph-based bot detection approaches. In this paper, we propose TwiBot-22, a comprehensive graph-based Twitter bot detection benchmark that presents the largest dataset to date, provides diversified entities and relations on the Twitter network, and has considerably better annotation quality than existing datasets. In addition, we re-implement 35 representative Twitter bot detection baselines and evaluate them on 9 datasets, including TwiBot-22, to promote fair comparison of model performance and a holistic understanding of research progress. To facilitate further research, we consolidate all the implemented code and datasets into the TwiBot-22 evaluation framework, where researchers can consistently evaluate new models and datasets. The TwiBot-22 Twitter bot detection benchmark and evaluation framework are publicly available at https://twibot22.github.io/.
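For a sense of the graph-based baselines such a benchmark evaluates, here is a hedged sketch of a two-layer GCN bot detector in PyTorch Geometric; the features, edges, and dimensions are synthetic stand-ins, not the TwiBot-22 loaders or any of the 35 re-implemented baselines.

```python
# Hedged sketch of a generic graph-based bot detector (binary node classification).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class BotGCN(torch.nn.Module):
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 2)         # bot / human

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

x = torch.randn(1000, 32)                       # per-user features
edge_index = torch.randint(0, 1000, (2, 5000))  # follow/retweet edges
logits = BotGCN()(x, edge_index)
print(logits.shape)                             # torch.Size([1000, 2])
```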
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, and sequential distillation, revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
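Finding 1) can be made concrete with a short sketch of token-relation distillation: match the student's token-to-token similarity map against the teacher's (taken from an intermediate layer), instead of matching CLS tokens or raw features. The temperature and the KL form are assumptions in the spirit of the description above, not TinyMIM's exact loss.

```python
# Hedged sketch of relation-based distillation between mismatched-width ViTs.
import torch
import torch.nn.functional as F

def relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    """KL between token-to-token similarity distributions, per query token."""
    def relations(t):                           # (B, N, D) -> (B, N, N)
        t = F.normalize(t, dim=-1)
        return (t @ t.transpose(1, 2)) / tau
    s_rel = F.log_softmax(relations(student_tokens), dim=-1)
    t_rel = F.softmax(relations(teacher_tokens), dim=-1)
    return F.kl_div(s_rel, t_rel, reduction="batchmean")

# relation maps are N x N, so teacher and student widths need not match
student = torch.randn(8, 196, 192)              # ViT-Tiny token width
teacher = torch.randn(8, 196, 768)              # ViT-Base intermediate layer
print(relation_distill_loss(student, teacher))
```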
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
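A minimal sketch of NAIVEATTACK-style trigger insertion (patch size, position, and poisoning rate are assumptions) stamps a small patch onto a fraction of the images entering distillation and relabels them to the attacker's target class; DOORPING would instead keep re-optimizing the trigger throughout the distillation procedure.

```python
# Hedged sketch of backdoor poisoning applied to data fed into distillation.
import torch

def add_trigger(images, labels, target_class=0, rate=0.1, patch=4):
    """Poison a fraction of a batch with a white corner patch."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch:, -patch:] = 1.0      # bottom-right white square
    labels[idx] = target_class                  # relabel to attacker's class
    return images, labels

x, y = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))
px, py = add_trigger(x, y)
print((py == 0).sum())                          # at least the poisoned count
```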
Benefiting from its intrinsic ability to exploit supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
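A hedged sketch of such a cross-view cosine objective follows; for simplicity the positive pair here is the same node across the two views (the paper selects positives from the same high-confidence cluster), and all shapes are illustrative.

```python
# Hedged sketch: pull cross-view positives together, push embeddings away
# from the centers of the other high-confidence clusters.
import torch
import torch.nn.functional as F

def ccgc_style_loss(z1, z2, centers, assign):
    """z1, z2: (N, D) cross-view embeddings; centers: (K, D); assign: (N,)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    centers = F.normalize(centers, dim=1)
    pos = (z1 * z2).sum(dim=1)                  # same node across the two views
    neg = z1 @ centers.T                        # similarity to every center
    neg = neg.scatter(1, assign.unsqueeze(1), 0.0)  # drop own cluster's center
    return (-pos + neg.mean(dim=1)).mean()     # maximize pos, minimize neg

z1, z2 = torch.randn(50, 16), torch.randn(50, 16)
centers, assign = torch.randn(5, 16), torch.randint(0, 5, (50,))
print(ccgc_style_loss(z1, z2, centers, assign))
```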